Physics-Informed Neural Network Frameworks for the Analysis of Engineering and Biological Dynamical Systems Governed by Ordinary Differential Equations
Whitman, Tyrus, Particka, Andrew, Diers, Christopher, Griffin, Ian, Wickramasinghe, Charuka, Ranaweera, Pradeep
In this study, we present and validate the predictive capability of the Physics-Informed Neural Networks (PINNs) methodology for solving a variety of engineering and biological dynamical systems governed by ordinary differential equations (ODEs). While traditional numerical methods are effective for many ODEs, they often struggle to achieve convergence in problems involving high stiffness, shocks, irregular domains, singular perturbations, high dimensions, or boundary discontinuities. In contrast, PINNs offer a powerful approach for handling such challenging numerical scenarios. Classical ODE problems are employed as controlled testbeds to systematically evaluate the accuracy, training efficiency, and generalization capability of the PINNs framework. Although not a universal solution, PINNs can achieve superior results by embedding physical laws directly into the learning process. We first analyze the existence and uniqueness properties of several benchmark problems and subsequently validate the PINNs methodology on these model systems. Our results demonstrate that, for complex problems to converge to correct solutions, the loss function components (data loss, initial condition loss, and residual loss) must be appropriately balanced through careful weighting. We further establish that systematic tuning of hyperparameters, including network depth, layer width, activation functions, learning rate, optimization algorithms, weight initialization schemes, and collocation point sampling, plays a crucial role in achieving accurate solutions. Additionally, embedding prior knowledge and imposing hard constraints on the network architecture, without loss of generality of the ODE system, significantly enhances the predictive capability of PINNs.
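The weighting of the three loss components can be sketched with a toy problem. The following is an illustrative example only, not the paper's implementation: it uses the assumed model ODE y' = -y with y(0) = 1 and a one-parameter trial solution y(t; a) = exp(-a·t) (the exact solution is a = 1), with arbitrarily chosen weights.

```python
import numpy as np

# Hypothetical sketch of a weighted PINN-style loss for the toy ODE
# y' = -y, y(0) = 1, with trial solution y(t; a) = exp(-a * t).
# The toy problem and the weights are illustrative choices, not the paper's setup.

def trial(t, a):
    return np.exp(-a * t)

def trial_dt(t, a):
    return -a * np.exp(-a * t)

def pinn_loss(a, t_col, t_data=None, y_data=None,
              w_res=1.0, w_ic=10.0, w_data=1.0):
    """Weighted sum of residual, initial-condition, and data losses."""
    res = trial_dt(t_col, a) + trial(t_col, a)   # ODE residual y' + y at collocation pts
    loss_res = np.mean(res ** 2)
    loss_ic = (trial(0.0, a) - 1.0) ** 2         # enforce y(0) = 1
    loss_data = 0.0
    if t_data is not None:
        loss_data = np.mean((trial(t_data, a) - y_data) ** 2)
    return w_res * loss_res + w_ic * loss_ic + w_data * loss_data

t_col = np.linspace(0.0, 2.0, 50)    # collocation points
print(pinn_loss(1.0, t_col))         # exact solution: residual vanishes, loss ~ 0
print(pinn_loss(1.5, t_col))         # perturbed parameter: strictly larger loss
```

In a full PINN the trial solution is a neural network and the residual is obtained by automatic differentiation; the balancing of `w_res`, `w_ic`, and `w_data` is the point being illustrated here.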
We are very happy to hear that all reviewers found the paper interesting and well written. Below we provide point-by-point responses to selected comments. All reviewers commented that it would have been interesting to see results on real-world datasets. Evaluating in a "controlled way" results in a cleaner and more informative numerical evaluation. The distribution-free bounds are typically between 0.99 and 1. We will add these additional experimental results to the revised supplementary material.
We thank the reviewers for the generous comments and invaluable feedback on the manuscript. Below we give a point-by-point response to the reviewers' concerns.

Reviewer #1

"Analysis of wall-clock time breakdown": We will move the wall-clock time breakdown analysis into the main text of the paper.

"Literature of architectures that are explicitly easy to distribute": Thank you for pointing out the related work, which we will cite in the camera-ready version. We will include a brief overview and figures for the architectures considered here in the appendix. Thank you very much for the suggestions.
Supplementary Material for: Adversarial Regression with Doubly Non-negative Weighting Matrices
In this supplementary material, we give details for the proofs of our technical results in Section A. In Section B, […]. In Section C, we illustrate additional empirical results. For Proposition 3.4, we begin by computing the support function of the convex cone of symmetric positive semidefinite matrices, which vanishes on matrices $A$ satisfying $\mathrm{Tr}(A\Omega) \le 0$ for all $\Omega \succeq 0$; therefore, the desired result follows. Proof of Proposition 3.4. Thus, the infimum in problem (A.2) can be restricted to γ > 0. This observation completes the proof. The following elementary fact is well known. For completeness, we include a proof here.
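The support-function computation referenced above can be sketched as follows; the notation $\mathbb{S}^n_+$ for the positive semidefinite cone is an assumption about the paper's conventions, and this is the standard textbook fact rather than the paper's specific argument.

```latex
\[
\sigma_{\mathbb{S}^n_+}(A)
  \;=\; \sup_{\Omega \succeq 0} \operatorname{Tr}(A\Omega)
  \;=\;
  \begin{cases}
    0, & A \preceq 0,\\[2pt]
    +\infty, & \text{otherwise,}
  \end{cases}
\]
\]
since for $A \preceq 0$ every $\Omega \succeq 0$ gives
$\operatorname{Tr}(A\Omega) \le 0$, with equality attained at $\Omega = 0$,
while if $A$ has an eigenvalue $\lambda > 0$ with unit eigenvector $v$,
taking $\Omega = t\, v v^\top$ yields
$\operatorname{Tr}(A\Omega) = t\lambda \to \infty$ as $t \to \infty$.
```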
Neural-Network Chemical Emulator for First-Star Formation: Robust Iterative Predictions over a Wide Density Range
Ono, Sojun, Sugimura, Kazuyuki
We present a neural-network emulator for the thermal and chemical evolution in Population III star formation. The emulator accurately reproduces the thermochemical evolution over a wide density range spanning 21 orders of magnitude (10$^{-3}$-10$^{18}$ cm$^{-3}$), tracking six primordial species: H, H$_2$, e$^{-}$, H$^{+}$, H$^{-}$, and H$_2^{+}$. To handle the broad dynamic range, we partition the density range into five subregions and train separate deep operator networks (DeepONets) in each region. When applied to randomly sampled thermochemical states, the emulator achieves relative errors below 10% in over 90% of cases for both temperature and chemical abundances (except for the rare species H$_2^{+}$). The emulator is roughly ten times faster on a CPU and more than 1000 times faster for batched predictions on a GPU, compared with conventional numerical integration. Furthermore, to ensure robust predictions under many iterations, we introduce a novel timescale-based update method, where a short-timestep update of each variable is computed by rescaling the predicted change over a longer timestep equal to its characteristic variation timescale. In one-zone collapse calculations, the results from the timescale-based method agree well with traditional numerical integration even with many iterations at a timestep as short as 10$^{-4}$ of the free-fall time. This proof-of-concept study suggests the potential for neural network-based chemical emulators to accelerate hydrodynamic simulations of star formation.
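The timescale-based update can be sketched with a mock emulator. In this illustrative example (not the paper's DeepONet), the "emulator" is an analytic stand-in that returns the exact change of a variable after one characteristic timescale `TAU` under exponential relaxation; a short-timestep update is then obtained by linearly rescaling that predicted change by `dt / TAU`, as the abstract describes.

```python
import numpy as np

# Hypothetical sketch of the timescale-based update: the (mock) emulator predicts
# the change of a variable over one characteristic timescale TAU, and a short-
# timestep update rescales that change by dt / TAU.  The emulator here is an
# analytic stand-in (exponential relaxation), not the paper's trained network.

TAU = 1.0  # characteristic variation timescale of the variable

def emulator_change_over_tau(y):
    """Mock emulator: change of y after one timescale TAU under y' = -y / TAU."""
    return y * (np.exp(-1.0) - 1.0)

def timescale_update(y, dt):
    """Short-timestep update via rescaling the predicted change over TAU."""
    return y + emulator_change_over_tau(y) * (dt / TAU)

y = 1.0
dt = 1e-3 * TAU                 # timestep much shorter than the timescale
for _ in range(2000):           # many iterations, as in the one-zone collapse test
    y = timescale_update(y, dt)

print(y)  # remains positive and decays smoothly over the 2000 steps
```

The point of the rescaling is robustness under many iterations: each step changes the variable only by a small, timescale-consistent fraction of the emulator's prediction, so errors do not compound catastrophically at timesteps far shorter than the training timestep.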
Greedy Subspace Clustering
Dohyung Park, Constantine Caramanis, Sujay Sanghavi
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses these sets to estimate the subspaces. As the geometric structure of the clusters (linear subspaces) prevents general distance-based approaches such as K-means from performing properly, many model-specific methods have been proposed. In this paper, we provide new simple and efficient algorithms for this problem. Our statistical analysis shows that the algorithms are guaranteed exact (perfect) clustering performance under certain conditions on the number of points and the affinity between subspaces. These conditions are weaker than those considered in the standard statistical literature. Experimental results on synthetic data generated from the standard union-of-subspaces model support our theory. We also show that our algorithm performs competitively against state-of-the-art algorithms on real-world applications such as motion segmentation and face clustering, with a much simpler implementation and lower computational cost.
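A greedy neighbor-selection step in the spirit of this approach can be sketched as follows. This is an illustrative simplification, not the authors' exact algorithm: for a query point, it repeatedly adds the point lying closest to the span of the points selected so far, and the toy instance (two orthogonal 2-dimensional subspaces in R^6) is an assumed setup chosen so the behavior is easy to verify.

```python
import numpy as np

# Illustrative sketch (not the authors' exact algorithm): greedily grow a
# neighbor set for a query point by repeatedly adding the point closest to
# the span of the points selected so far.

def greedy_neighbors(X, i, k):
    """X: (d, N) matrix with unit-norm columns; return k greedy neighbors of column i."""
    chosen = [i]
    U = X[:, [i]]                                          # orthonormal basis of current span
    for _ in range(k):
        proj_norm = np.linalg.norm(U @ (U.T @ X), axis=0)  # closeness of each point to span
        proj_norm[chosen] = -np.inf                        # exclude already-chosen points
        j = int(np.argmax(proj_norm))
        chosen.append(j)
        r = X[:, j] - U @ (U.T @ X[:, j])                  # component of x_j off the span
        if np.linalg.norm(r) > 1e-10:                      # extend basis if span grew
            U = np.hstack([U, (r / np.linalg.norm(r))[:, None]])
    return chosen[1:]

# Toy instance: two orthogonal 2-dimensional subspaces in R^6, 20 points each.
rng = np.random.default_rng(0)
B1 = np.eye(6)[:, :2]                                      # span(e1, e2)
B2 = np.eye(6)[:, 2:4]                                     # span(e3, e4)
pts = np.hstack([B1 @ rng.standard_normal((2, 20)),
                 B2 @ rng.standard_normal((2, 20))])
pts /= np.linalg.norm(pts, axis=0)

nbrs = greedy_neighbors(pts, 0, 5)
print(nbrs)  # neighbors of point 0 all come from the first subspace (indices < 20)
```

Once such neighbor sets are found, each subspace can be estimated from the top singular vectors of the corresponding point set; the greedy span-growing step is what keeps the per-point cost low compared with solving a global optimization.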